
    Hardness Results for Structured Linear Systems

    We show that if the nearly-linear time solvers for Laplacian matrices and their generalizations can be extended to solve just slightly larger families of linear systems, then they can be used to quickly solve all systems of linear equations over the reals. This result can be viewed either positively or negatively: either we will develop nearly-linear time algorithms for solving all systems of linear equations over the reals, or progress on the families we can solve in nearly-linear time will soon halt.
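
    As a concrete illustration of the family of systems at stake, a Laplacian system Lx = b (with L = D - A the graph Laplacian, and b summing to zero so the system is solvable) can be set up and solved as in the sketch below. This is only an illustration: the 4-cycle graph is a made-up example, and scipy's generic conjugate-gradient solver stands in for the specialized nearly-linear-time solvers the abstract refers to.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import laplacian
    from scipy.sparse.linalg import cg

    # Adjacency matrix of a toy 4-cycle (hypothetical example graph).
    A = csr_matrix(np.array([[0., 1., 0., 1.],
                             [1., 0., 1., 0.],
                             [0., 1., 0., 1.],
                             [1., 0., 1., 0.]]))
    L = laplacian(A)                      # L = D - A, the system matrix
    b = np.array([1., -1., 1., -1.])      # entries sum to 0, so Lx = b is solvable
    x, info = cg(L, b)                    # generic CG as a stand-in solver
    print(info, np.linalg.norm(L @ x - b))  # info == 0 means converged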

    Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple

    We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by a matrix with a sparse Cholesky factorization (the version of Gaussian elimination for symmetric matrices). This is the first nearly linear time solver for Laplacian systems that is based purely on random sampling and does not use any graph-theoretic constructions such as low-stretch trees, sparsifiers, or expanders. The crux of our analysis is a novel concentration bound for matrix martingales where the differences are sums of conditionally independent variables.
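
    The elimination-with-sampling idea can be sketched in a few lines: when a vertex v is eliminated, exact Gaussian elimination adds a weighted clique on its neighbors, and the algorithm instead adds about deg(v) sampled edges whose expectation matches the exact Schur complement; repeating this over an elimination order yields the sparse Cholesky factor. The toy Python sketch below (dict-of-dicts weighted graph) only illustrates the mechanics and is not the paper's calibrated sampling rule; the matrix-martingale analysis is what makes the real scheme spectrally accurate.

    import random

    def eliminate_vertex(adj, v, rng=random):
        # Exact elimination adds, for every neighbor pair (a, b), an edge of
        # weight w(v,a) * w(v,b) / wdeg(v).  Instead, add one sampled edge
        # per neighbor, matching the exact clique in expectation.
        nbrs = list(adj[v].items())            # [(neighbor, weight), ...]
        wdeg = sum(w for _, w in nbrs)
        for a, wa in nbrs:
            # Pick partner b with probability w(v,b) / wdeg(v).
            r, acc = rng.random() * wdeg, 0.0
            for b, wb in nbrs:
                acc += wb
                if acc >= r:
                    break
            if a != b:  # skip self-draws for simplicity (a small bias in this toy)
                # Weight wa/2: pair (a, b) can be hit from both a's and b's
                # draws, so the expected added weight is wa * wb / wdeg.
                adj[a][b] = adj[a].get(b, 0.0) + wa / 2.0
                adj[b][a] = adj[a][b]
        for u, _ in nbrs:                      # finally remove v from the graph
            del adj[u][v]
        del adj[v]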

    Faster Sparse Matrix Inversion and Rank Computation in Finite Fields

    We improve the best known running time for inverting sparse matrices over finite fields, lowering it to an expected O(n^{2.2131}) time given the current bounds on fast rectangular matrix multiplication. We achieve the same running time for computing the rank and nullspace of a sparse matrix over a finite field. This improvement relies on two key techniques. First, we adopt the decomposition of an arbitrary matrix into block Krylov and Hankel matrices from Eberly et al. (ISSAC 2007). Second, we show how to recover the explicit inverse of a block Hankel matrix using low displacement rank techniques for structured matrices and fast rectangular matrix multiplication algorithms. We generalize our inversion method to block-structured matrices with other displacement operators and strengthen the best known upper bounds for explicit inversion of block Toeplitz-like and block Hankel-like matrices, as well as for explicit inversion of block Vandermonde-like matrices with structured blocks. As a further application, we improve the complexity of several algorithms in topological data analysis and in finite group theory.
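
    To make the first ingredient concrete, the block Krylov decomposition arranges projections H_i = U^T A^i V of the sparse matrix against random block vectors U, V into a block Hankel matrix. The numpy sketch below (dense arithmetic mod p; block size b and block count k are free parameters chosen here for illustration) only shows the object being constructed, not the paper's fast inversion of it.

    import numpy as np

    def block_hankel(A, p, b, k, seed=0):
        """Assemble the k-by-k block Hankel matrix with (i, j) block
        U^T A^(i+j) V (mod p) for random projections U, V.  Dense toy
        version: keep p small to avoid int64 overflow; the paper exploits
        sparsity of A and fast rectangular matrix multiplication."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        U = rng.integers(0, p, size=(n, b))
        V = rng.integers(0, p, size=(n, b))
        blocks, W = [], V % p
        for _ in range(2 * k - 1):        # blocks U^T A^i V for i = 0 .. 2k-2
            blocks.append(U.T @ W % p)
            W = (A @ W) % p               # advance the Krylov sequence
        return np.block([[blocks[i + j] for j in range(k)] for i in range(k)])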

    Sampling Random Spanning Trees Faster than Matrix Multiplication

    We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in \tilde{O}(n^{4/3}m^{1/2}+n^{2}) time (the \tilde{O}(\cdot) notation hides \operatorname{polylog}(n) factors). The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^\omega). For the special case of unweighted graphs, this improves upon the best previously known running time of \tilde{O}(\min\{n^{\omega}, m\sqrt{n}, m^{4/3}\}) for m \gg n^{5/3} (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew the determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute \epsilon-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in \tilde{O}(m+(n+|S|)\epsilon^{-2}) time, without using the Johnson-Lindenstrauss lemma, which requires \tilde{O}(\min\{(m+|S|)\epsilon^{-2},\, m+n\epsilon^{-4}+|S|\epsilon^{-2}\}) time. We combine this approximation procedure with an error-correction procedure for handling edges where our estimate is not sufficiently accurate.
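
    The key structural fact here, that eliminating vertices (taking a Schur complement) preserves effective resistances among the remaining ones, is easy to verify numerically. Below is a dense numpy sketch using the textbook pseudoinverse formula R(u,v) = (e_u - e_v)^T L^+ (e_u - e_v); the weighted 4-vertex path is a made-up example, and none of this reflects the paper's fast approximate Schur complement machinery.

    import numpy as np

    def effective_resistance(L, u, v):
        # R(u,v) = (e_u - e_v)^T L^+ (e_u - e_v); dense O(n^3) reference.
        chi = np.zeros(L.shape[0])
        chi[u], chi[v] = 1.0, -1.0
        return chi @ np.linalg.pinv(L) @ chi

    def schur_complement(L, keep):
        # Eliminate all vertices outside `keep`; the result is again a
        # Laplacian and preserves effective resistances among kept vertices.
        keep = np.asarray(keep)
        elim = np.setdiff1d(np.arange(L.shape[0]), keep)
        B = L[np.ix_(keep, elim)]
        return L[np.ix_(keep, keep)] - B @ np.linalg.solve(L[np.ix_(elim, elim)], B.T)

    # Demo on a weighted path 0-1-2-3 with edge weights 1, 2, 3:
    # resistances add in series, so R(0,3) = 1/1 + 1/2 + 1/3.
    L = np.array([[ 1., -1.,  0.,  0.],
                  [-1.,  3., -2.,  0.],
                  [ 0., -2.,  5., -3.],
                  [ 0.,  0., -3.,  3.]])
    S = schur_complement(L, [0, 3])        # eliminate vertices 1 and 2
    print(effective_resistance(L, 0, 3))   # 11/6 ~= 1.8333
    print(effective_resistance(S, 0, 1))   # same value after elimination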